Salient object detection (SOD) has recently attracted attention, but high-resolution (HR) images have been studied less. Unfortunately, compared to low-resolution (LR) images and annotations, HR images and their pixel-level annotations are considerably more labor-intensive and time-consuming to obtain. Therefore, we propose HR prediction without any HR dataset, with an image-pyramid-based SOD framework, the Inverse Saliency Pyramid Reconstruction Network (InSPyReNet). We design InSPyReNet to produce a strict image pyramid structure, which enables it to blend multiple results with pyramid-based image blending. For HR prediction, we design a pyramid blending method that synthesizes two different image pyramids from a pair of LR and HR scales of the same image to overcome the effective receptive field (ERF) discrepancy. Our extensive evaluation on public LR and HR SOD benchmarks demonstrates that InSPyReNet surpasses state-of-the-art (SOTA) methods on various SOD metrics and boundary accuracy.
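A rough pyramid-blending sketch of the idea above (not the authors' implementation): Laplacian pyramids are built for the HR-scale prediction, and the low-frequency base is swapped with the upsampled LR-scale prediction, so that global structure comes from the LR branch and boundary detail from the HR branch. The function names, number of levels, and use of OpenCV are assumptions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid; the last element is the coarsest Gaussian level."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # high-frequency residual at this scale
        cur = down
    pyr.append(cur)            # coarsest low-frequency level
    return pyr

def reconstruct(pyr):
    """Collapse a Laplacian pyramid back into an image."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return cur

def blend_lr_hr(pred_hr, pred_lr, levels=3):
    """Blend saliency predictions from HR and LR scales of the same image."""
    pyr = laplacian_pyramid(pred_hr, levels)
    lr_up = cv2.resize(pred_lr, (pred_hr.shape[1], pred_hr.shape[0]))
    pyr[-1] = laplacian_pyramid(lr_up, levels)[-1]  # LR branch supplies the base
    return np.clip(reconstruct(pyr), 0.0, 1.0)
```

Taking only the LR base is a simplification for illustration; the paper blends the two pyramids across scales rather than swapping a single level.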
Sentence summarization shortens given texts while maintaining the core content of the texts. Unsupervised approaches have been studied to summarize texts without human-written summaries. However, recent unsupervised models are extractive: they remove words from texts and are therefore less flexible than abstractive summarization. In this work, we devise an abstractive model based on reinforcement learning without ground-truth summaries. We formulate unsupervised summarization as a Markov decision process with rewards representing the summary quality. To further enhance the summary quality, we develop a multi-summary learning mechanism that generates multiple summaries with varying lengths for a given text, while making the summaries mutually enhance each other. Experimental results show that the proposed model substantially outperforms both abstractive and extractive models, while frequently generating new words not contained in the input texts.
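A minimal policy-gradient sketch of the formulation described above, assuming a PyTorch policy that emits per-token logits while sampling; the reward and the mean-reward baseline used for the multi-summary variant are placeholders, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def reinforce_loss(token_logits, sampled_ids, reward):
    """REINFORCE objective for one sampled summary.
    token_logits : (T, vocab) logits produced while sampling the summary
    sampled_ids  : (T,) token ids actually sampled
    reward       : scalar summary-quality reward (placeholder, e.g. content
                   preservation + fluency - length penalty)
    """
    log_probs = F.log_softmax(token_logits, dim=-1)
    picked = log_probs.gather(1, sampled_ids.unsqueeze(1)).squeeze(1)  # (T,)
    return -reward * picked.sum()

def multi_summary_loss(samples):
    """samples: list of (token_logits, sampled_ids, reward) for summaries of
    varying lengths; the mean reward serves as a baseline so the summaries
    calibrate one another (a rough stand-in for multi-summary learning)."""
    baseline = sum(r for _, _, r in samples) / len(samples)
    return torch.stack(
        [reinforce_loss(lg, ids, r - baseline) for lg, ids, r in samples]
    ).mean()
```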
Harmonic functions are abundant in nature, appearing in limiting cases of Maxwell's and the Navier-Stokes equations, the heat equation and the wave equation. Consequently, harmonic functions have many applications, from industrial process optimisation to robotic path planning and the calculation of first exit times of random walks. Despite their ubiquity and relevance, there have been few attempts to develop effective means of representing harmonic functions in the context of machine learning architectures, either in machine learning on classical computers or in the nascent field of quantum machine learning. Architectures which impose or encourage an inductive bias towards harmonic functions would facilitate data-driven modelling and the solution of inverse problems in a range of applications. For classical neural networks, it has already been established how leveraging inductive biases can in general lead to improved performance of learning algorithms. The introduction of such inductive biases within a quantum machine learning setting is instead still in its nascent stages. In this work, we derive exactly-harmonic (conventional- and quantum-) neural networks in two dimensions for simply-connected domains by leveraging the characteristics of holomorphic complex functions. We then demonstrate how these can be approximately extended to multiply-connected two-dimensional domains using techniques inspired by domain decomposition in physics-informed neural networks. We further provide architectures and training protocols to effectively impose approximately harmonic constraints in three dimensions and higher, and as a corollary we report divergence-free network architectures in arbitrary dimensions. Our approaches are demonstrated with applications to heat transfer, electrostatics and robot navigation, with comparisons to physics-informed neural networks included.
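A toy PyTorch sketch of the two-dimensional construction: the real part of any holomorphic function is harmonic, so a network parameterized as a complex polynomial in z = x + iy satisfies Laplace's equation exactly and only needs to be fit to boundary data. The polynomial form and degree are assumptions; the paper's architectures are more general.

```python
import torch
import torch.nn as nn

class HarmonicNet2D(nn.Module):
    """u(x, y) = Re f(x + iy) with f a learnable complex polynomial, hence
    exactly harmonic on simply-connected domains (toy stand-in, not the
    paper's exact architecture)."""

    def __init__(self, degree=8):
        super().__init__()
        self.coeffs = nn.Parameter(0.1 * torch.randn(degree + 1, dtype=torch.cfloat))

    def forward(self, xy):                      # xy: (N, 2) real coordinates
        z = torch.complex(xy[:, 0], xy[:, 1])   # (N,) complex points
        powers = torch.stack([z ** k for k in range(self.coeffs.numel())], dim=-1)
        return (powers * self.coeffs).sum(dim=-1).real   # harmonic by construction
```

Training only requires a data-fitting loss on boundary samples (e.g. mean squared error against prescribed boundary values); no PDE residual term is needed because harmonicity holds by construction.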
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images, by using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a discrepancy: they only consider image-text level alignment at training time, while the segmentation task requires region-text level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework that directly aligns a text and a region described by the text to address the train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text level alignment instead of image-text level alignment, and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol with 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance by large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
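A rough sketch of the text-grounded contrastive loss, assuming dense image features and pooled text embeddings; the text-conditioned mask, the pooling, and the symmetric InfoNCE terms are simplified placeholders, not the released implementation.

```python
import torch
import torch.nn.functional as F

def tcl_loss(img_feats, txt_emb, tau=0.07):
    """img_feats : (B, C, H, W) dense image features
       txt_emb   : (B, C) text embeddings for the paired captions"""
    B = img_feats.shape[0]
    feats = F.normalize(img_feats, dim=1)
    txt = F.normalize(txt_emb, dim=1)

    # Text-conditioned mask: per-pixel similarity to the paired text.
    sim = torch.einsum("bchw,bc->bhw", feats, txt)
    mask = torch.sigmoid(sim / tau).unsqueeze(1)              # (B, 1, H, W)

    # Grounded image embedding: mask-weighted average pooling over the region.
    grounded = (feats * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    grounded = F.normalize(grounded, dim=1)

    # Symmetric InfoNCE between grounded-region and text embeddings.
    logits = grounded @ txt.t() / tau                          # (B, B)
    labels = torch.arange(B, device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```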
Routine clinical visits of a patient produce not only image data, but also non-image data containing clinical information regarding the patient, i.e., medical data is multi-modal in nature. Such heterogeneous modalities offer different and complementary perspectives on the same patient, resulting in more accurate clinical decisions when they are properly combined. However, despite its significance, how to effectively fuse the multi-modal medical data into a unified framework has received relatively little attention. In this paper, we propose an effective graph-based framework called HetMed (Heterogeneous Graph Learning for Multi-modal Medical Data Analysis) for fusing the multi-modal medical data. Specifically, we construct a multiplex network that incorporates multiple types of non-image features of patients to capture the complex relationship between patients in a systematic way, which leads to more accurate clinical decisions. Extensive experiments on various real-world datasets demonstrate the superiority and practicality of HetMed. The source code for HetMed is available at https://github.com/Sein-Kim/Multimodal-Medical.
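As a simplified illustration of the multiplex-network construction, the sketch below builds one kNN patient-similarity layer per group of non-image features with scikit-learn; the grouping into layers and the choice of k are assumptions, not HetMed's actual graph construction.

```python
from sklearn.neighbors import kneighbors_graph

def build_multiplex_network(feature_groups, k=5):
    """feature_groups: dict mapping a layer name (e.g. 'demographics', 'labs')
    to a (num_patients, dim) feature matrix. Returns one sparse adjacency
    matrix per layer, together forming a multiplex patient network."""
    return {
        name: kneighbors_graph(feats, n_neighbors=k, mode="connectivity")
        for name, feats in feature_groups.items()
    }
```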
We present HOReeNet, which tackles the novel task of manipulating images involving hands, objects, and their interactions. In particular, we are interested in transferring objects from source images to target images and manipulating 3D hand postures to tightly grasp the transferred objects. Furthermore, the manipulation needs to be reflected in the 2D image space. In our reenactment scenario involving hand-object interactions, 3D reconstruction becomes essential, as 3D contact reasoning between hands and objects is required to achieve a tight grasp. At the same time, to obtain high-quality 2D images from 3D space, a well-designed 3D-to-2D projection and image refinement are required. Our HOReeNet is the first fully differentiable framework proposed for such a task. On hand-object interaction datasets, we compare HOReeNet to conventional image translation and reenactment algorithms and demonstrate that our approach achieves state-of-the-art results on the proposed task.
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performances for larger LMs; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with a previous data preprocessing method and a decoding method known to mitigate privacy risks for LMs, we show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust. We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning.
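A minimal sketch of the gradient-ascent procedure with Hugging Face Transformers: the language-modeling loss on the target sequences is negated so that optimization increases it. The model choice, learning rate, and iteration count are illustrative only; see the linked repository for the actual recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

target_texts = ["<private sequence to forget>"]   # placeholder target data

model.train()
for _ in range(10):                                # a few unlearning iterations
    for text in target_texts:
        batch = tokenizer(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])
        loss = -out.loss                           # ascend on the LM loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```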
Sequential recommender systems show effective recommendations by capturing users' interest drift. There are two groups of existing sequential models: user-centric and item-centric models. User-centric models capture personalized interest drift based on each user's sequential consumption history, but do not explicitly consider whether users' interest in items will be sustained beyond the training time, i.e., interest sustainability. On the other hand, item-centric models consider whether users' general interest is sustained after the training time, but are not personalized. In this work, we propose a recommender system that takes advantage of both categories of models. Our proposed model captures personalized interest sustainability, indicating whether each user's interest in items will be sustained beyond the training time. We first formulate a task that requires predicting which items each user will consume in the recent period of the training time based on the user's consumption history. We then propose simple yet effective schemes to augment users' sparse consumption histories. Extensive experiments show that the proposed model outperforms 10 baseline models on 11 real-world datasets. The code is available at https://github.com/dmhyun/peris.
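A toy construction of the auxiliary prediction task described above, assuming timestamped (user, item) interactions: items consumed before a split time form the input history, and whether the user consumes them again afterwards provides the sustainability label. Field names and the split are hypothetical.

```python
from collections import defaultdict

def build_sustainability_labels(interactions, split_time):
    """interactions: iterable of (user_id, item_id, timestamp) tuples."""
    history, future = defaultdict(set), defaultdict(set)
    for user, item, ts in interactions:
        (history if ts < split_time else future)[user].add(item)
    # 1 if the user's interest in the item is sustained past the split, else 0.
    labels = {u: {i: int(i in future[u]) for i in items} for u, items in history.items()}
    return history, labels
```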
Forecasting traffic conditions is highly challenging because each road is highly dependent on others in both space and time. Recently, specially designed architectures such as graph convolutional networks and temporal convolutional networks have been introduced to capture such spatial and temporal dependencies. Despite remarkable progress in traffic forecasting, we find that deep-learning-based traffic forecasting models still fail in certain patterns, mainly in event situations (e.g., rapid speed drops). Although such failures are commonly attributed to unpredictable noise, we find that they can be corrected by taking previous failures into account. Specifically, we observe autocorrelated errors in these failures, which indicates that some predictable information remains. In this study, to capture the correlation of errors, we introduce ResCAL, a residual estimation module for traffic forecasting, as a widely applicable add-on module for existing traffic forecasting models. Our ResCAL calibrates the predictions of existing models in real time by estimating future errors from previous errors and graph signals. Extensive experiments on METR-LA and PEMS-BAY demonstrate that ResCAL correctly captures the correlation of errors and corrects the failures of various traffic forecasting models in event situations.
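A conceptual sketch of an add-on calibrator in the spirit of the residual-estimation module above: a small network predicts the frozen forecaster's upcoming error per node from its recent errors and adds the estimate back to the forecast. The shapes, the error window, and the omission of graph signals are simplifications.

```python
import torch.nn as nn

class ResidualCalibrator(nn.Module):
    """Predicts the next prediction error per node from its recent errors and
    corrects the base forecast (not the authors' implementation)."""

    def __init__(self, err_window=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(err_window, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, base_pred, recent_errors):
        # base_pred:     (B, N)              current forecast per road/node
        # recent_errors: (B, N, err_window)  past prediction errors per node
        est_error = self.net(recent_errors).squeeze(-1)  # (B, N)
        return base_pred + est_error                      # calibrated forecast
```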